General Introduction
Generally, a time series is defined as a collection of observations made sequentially over time, or data collected at regular intervals of time. Although the ordering is usually through time, particularly at equally spaced time intervals, it may also be taken through other dimensions, such as space. Time series occur in a wide variety of fields. The observations are stochastic and are known to follow patterns described by time series theory. Time series analysis is an important branch of statistics, well known for its descriptive capability, for the identification and estimation of stochastic models of an underlying dynamic system, and for its uses in forecasting and monitoring. The components of a series include the trend, seasonal movement, cyclical movement, irregular movement and outliers.

Real data and databases often include erroneous parts. Situations that damage the characteristics of data are called "abnormal conditions," and the values that cause these abnormal conditions are called outliers (Kaya, 2010). Outliers cause the parameter estimates in modelling to be distorted, and they damage the modelling process even when it is otherwise properly specified, so it is necessary to remove them or to eliminate their effects. A commonly used definition is that outliers are the minority of observations in a dataset whose patterns differ from those of the majority, or observations that deviate so much from other observations as to arouse suspicion that they were generated by a different mechanism (Hawkins, 1980). An outlier can also be defined as an observation that appears to be inconsistent with the remainder of the data set (Barnett and Lewis, 1994).
Another definition is that outliers are the minority of observations in a dataset whose patterns differ from those of the majority. The assumption here is that there is a core of at least 50% of observations in the dataset that are homogeneous (that is, represented by a common pattern), while the remaining observations (hopefully few) have patterns that are inconsistent with this common pattern. Identification of outlying data points is often by itself the primary goal, without any intention of fitting a statistical model: the outliers themselves are the points of primary interest, drawing attention to unknown aspects of the data or, especially when unexpected, leading to new discoveries.

From a human angle, in the September 11, 2001 attacks on the World Trade Center in New York, United States of America, 5 out of the 80 passengers on one of the flights displayed unusual characteristics. These five passengers (outliers) were not U.S. citizens but had lived in the USA for some period of time, were citizens of the same foreign country, had all purchased one-way tickets, had purchased these tickets at the gate with cash rather than credit cards, and did not have any checked luggage. One or two of these characteristics might not be very unusual, but taken together they could be seen as markedly different from the majority of airline passengers. Unauthorized computer network intrusions can also be viewed as outliers, in that the intruder exhibits a combination of characteristics that, jointly considered, differ from those of typical network users. Perpetrators of credit card fraud provide yet another example in which identification of outliers is critical, and in which the transaction database needs to be analysed with the specific purpose of identifying unusual transactions. These examples demonstrate the need for outlier identification in datasets of every kind.
The essence of outlier detection is to discover unusual data whose behaviour is exceptional when compared to the rest of the data set. Examining the extraordinary behaviour of outliers helps to uncover the valuable knowledge hidden behind them and helps decision makers to improve the quality of their data. Detection methods are divided into two groups: univariate and multivariate. In univariate methods, observations are examined individually, while in multivariate methods, associations between variables in the same dataset are taken into account. Different types of outliers, such as additive and innovation outliers, were studied by Tsay et al. (2000). A graphical method was explored by Khattree and Naik (1987), Grossi (1999) proposed a leave-k-out diagnostic procedure, and a Bayesian analysis was performed by Barnett (1978). The problem of outlier detection in time series has received attention since as far back as the early 1970s, and for this reason several outlier detection and robust estimation procedures have been proposed in the literature for time series analysis.
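As a minimal illustration of the univariate approach described above (a generic z-score rule, not a procedure proposed in this study), observations may be flagged when their deviation from the sample mean exceeds a chosen threshold. The function name, the threshold value and the data below are hypothetical choices for demonstration only:

```python
from statistics import mean, stdev

def zscore_outliers(series, threshold=3.0):
    """Return indices of observations whose absolute z-score exceeds the threshold.

    A simple univariate rule: each observation is judged only by its own
    deviation from the sample mean, ignoring serial dependence in the series.
    """
    m = mean(series)
    s = stdev(series)
    return [i for i, x in enumerate(series) if abs(x - m) / s > threshold]

# A short series with one injected additive outlier at index 5.
data = [10.1, 9.8, 10.3, 9.9, 10.0, 25.0, 10.2, 9.7, 10.1, 10.0]
print(zscore_outliers(data, threshold=2.0))  # -> [5]
```

Note that because the mean and standard deviation are themselves inflated by the outlier, such naive rules can mask extreme values; this is one motivation for the robust estimation procedures mentioned above.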